138 research outputs found
Many inverse problems can be described by a PDE model with unknown parameters
that need to be calibrated based on measurements related to its solution. This
can be seen as a constrained minimization problem in which one minimizes the
mismatch between the observed data and the model predictions, plus a
regularization term, with the PDE acting as a constraint. Often, a suitable
regularization parameter is determined by solving the problem for a whole range
of parameter values -- e.g. using the L-curve -- which is computationally very
expensive. In this paper we derive two methods that simultaneously solve the
inverse problem and determine a suitable value for the regularization
parameter. The first is a direct generalization of the Generalized Arnoldi
Tikhonov method for linear inverse problems. The second is a novel method based
on similar ideas, but with a number of advantages for nonlinear problems.
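The expensive parameter sweep the abstract refers to can be sketched for a plain linear Tikhonov problem (a minimal illustration with made-up sizes and data, not the paper's PDE-constrained setting): every candidate regularization parameter costs one full solve, which is exactly what the paper's simultaneous methods avoid.

```python
import numpy as np

# Hypothetical linear inverse problem: recover x from noisy data d = A x + noise.
rng = np.random.default_rng(0)
n = 50
A = rng.standard_normal((n, n)) / np.sqrt(n)
x_true = np.ones(n)
d = A @ x_true + 1e-2 * rng.standard_normal(n)

def tikhonov_solve(A, d, lam):
    """Solve the regularized normal equations (A^T A + lam I) x = A^T d."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ d)

# Classic approach: one full solve per candidate parameter, e.g. to trace
# out an L-curve. This repeated cost grows quickly for large-scale problems.
lams = np.logspace(-6, 1, 15)
residuals = [np.linalg.norm(A @ tikhonov_solve(A, d, lam) - d) for lam in lams]
# The residual norm grows with lam: more regularization, worse data fit.
```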
Algorithm for the reconstruction of dynamic objects in CT-scanning using optical flow
Computed Tomography is a powerful imaging technique that allows
non-destructive visualization of the interior of physical objects in different
scientific areas. Traditional reconstruction techniques mostly consider the
object of interest to be static, which leads to artefacts if the object moves
during the data acquisition. In this paper we present a method that, given only
the results of multiple successive scans, uses optical flow to estimate the
motion and correct the CT images for it, assuming that the motion field is
smooth over the complete domain. The proposed method is validated on simulated
scan data. The main contribution is showing that the optical flow technique
from imaging can be used to correct CT-scan images for motion.
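The core ingredient, estimating motion between two successive frames from the optical flow (brightness constancy) constraint, can be sketched in a deliberately tiny 1D setting (illustrative only; the paper estimates a smooth motion field over the whole domain, here the motion is a single global shift):

```python
import numpy as np

def estimate_shift(f0, f1):
    """Least-squares solution of the linearized optical flow equation
    f_x * u + f_t = 0, giving one global shift u in grid units."""
    fx = np.gradient(f0)   # spatial derivative (per grid index)
    ft = f1 - f0           # temporal difference between the two frames
    return -np.sum(fx * ft) / np.sum(fx * fx)

# Two "scans" of the same object, the second slightly shifted.
x = np.linspace(0, 2 * np.pi, 200)
dx = x[1] - x[0]
frame0 = np.exp(-((x - 3.0) ** 2))
frame1 = np.exp(-((x - 3.0 - 0.05) ** 2))

u = estimate_shift(frame0, frame1)
# u * dx recovers the physical shift (about 0.05 here); undoing this shift
# before reconstruction is the essence of the motion correction.
```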
Initialization of lattice Boltzmann models with the help of the numerical Chapman-Enskog expansion
We extend the applicability of the numerical Chapman-Enskog expansion as a
lifting operator for lattice Boltzmann models to map density and momentum to
distribution functions. In earlier work [Vanderhoydonc et al. Multiscale Model.
Simul. 10(3): 766-791, 2012] such an expansion was constructed in the context
of lifting only the zeroth order velocity moment, namely the density. A lifting
operator is necessary to convert information from the macroscopic to the
mesoscopic scale. This operator is used for the initialization of lattice
Boltzmann models. Given only density and momentum, the goal is to initialize
the distribution functions of lattice Boltzmann models. For this
initialization, the numerical Chapman-Enskog expansion is used in this paper.
Comment: arXiv admin note: text overlap with arXiv:1108.491
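The simplest lifting operator maps density and momentum to the local equilibrium distributions; the paper's numerical Chapman-Enskog expansion adds gradient corrections on top of this. A sketch of the equilibrium part for the standard D2Q9 lattice (textbook formulas, not the paper's higher-order construction):

```python
import numpy as np

# D2Q9 lattice velocities and weights (sound speed c_s^2 = 1/3).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)

def equilibrium(rho, u):
    """Second-order equilibrium distributions f_i^eq(rho, u)."""
    cu = c @ u                     # c_i . u for each lattice velocity
    usq = u @ u
    return rho * w * (1 + 3 * cu + 4.5 * cu**2 - 1.5 * usq)

# Lift prescribed macroscopic fields to mesoscopic distribution functions.
rho = 1.2
u = np.array([0.05, -0.02])
f = equilibrium(rho, u)
# By construction this lifting reproduces the prescribed moments exactly:
# f.sum() equals rho, and c.T @ f equals rho * u.
```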
Projected Newton Method for noise constrained Tikhonov regularization
Tikhonov regularization is a popular approach to obtain a meaningful solution
for ill-conditioned linear least squares problems. A relatively simple way of
choosing a good regularization parameter is given by Morozov's discrepancy
principle. However, most approaches require the solution of the Tikhonov
problem for many different values of the regularization parameter, which is
computationally demanding for large scale problems. We propose a new and
efficient algorithm which simultaneously solves the Tikhonov problem and finds
the corresponding regularization parameter such that the discrepancy principle
is satisfied. We achieve this by formulating the problem as a nonlinear system
of equations and solving this system using a line search method. We obtain a
good search direction by projecting the problem onto a low dimensional Krylov
subspace and computing the Newton direction for the projected problem. This
projected Newton direction, which is significantly less computationally
expensive to calculate than the true Newton direction, is then combined with a
backtracking line search to obtain a globally convergent algorithm, which we
refer to as the Projected Newton method. We prove convergence of the algorithm
and illustrate the improved performance over current state-of-the-art solvers
with some numerical experiments.
Fast derivatives of likelihood functionals for ODE based models using adjoint-state method
We consider time series data modeled by ordinary differential equations
(ODEs), widespread models in physics, chemistry, biology and science in
general. The sensitivity analysis of such dynamical systems usually requires
calculation of various derivatives with respect to the model parameters.
We employ the adjoint state method (ASM) for efficient computation of the
first and the second derivatives of likelihood functionals constrained by ODEs
with respect to the parameters of the underlying ODE model. Essentially, the
gradient can be computed with a cost (measured by model evaluations) that is
independent of the number of the ODE model parameters and the Hessian with a
linear cost in the number of the parameters instead of the quadratic one. The
sensitivity analysis becomes feasible even if the parametric space is
high-dimensional.
The main contributions are derivation and rigorous analysis of the ASM in the
statistical context, when the discrete data are coupled with the continuous ODE
model. Further, we present a highly optimized implementation of the results and
its benchmarks on a number of problems.
The results are directly applicable in, for example, maximum-likelihood
estimation or Bayesian sampling of ODE-based statistical models, allowing
faster, more stable estimation of the parameters of the underlying ODE model.
Comment: 5 figure
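The flavor of the adjoint-state computation can be shown on a toy discrete problem (a sketch, not the paper's general framework): explicit Euler for the scalar ODE x' = -theta * x with a quadratic misfit on the final state. One forward sweep plus one backward (adjoint) sweep yields the exact gradient of the discretized loss, at a cost independent of the number of parameters.

```python
import numpy as np

def loss_and_grad(theta, x0=1.0, y=0.5, h=0.01, nsteps=100):
    # Forward sweep: integrate x' = -theta * x with explicit Euler,
    # storing the trajectory for the adjoint pass.
    xs = [x0]
    for _ in range(nsteps):
        xs.append(xs[-1] * (1 - h * theta))
    loss = (xs[-1] - y) ** 2

    # Backward (adjoint) sweep: propagate lam_k = dL/dx_k and accumulate
    # the parameter gradient from each step's explicit theta-dependence.
    lam = 2 * (xs[-1] - y)              # dL/dx_N
    grad = 0.0
    for k in range(nsteps - 1, -1, -1):
        grad += lam * (-h * xs[k])      # d x_{k+1} / d theta, x_k held fixed
        lam *= (1 - h * theta)          # d x_{k+1} / d x_k
    return loss, grad

loss, grad = loss_and_grad(theta=1.0)
# grad agrees with a finite-difference check on the same discrete loss.
```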
Numerically Stable Recurrence Relations for the Communication Hiding Pipelined Conjugate Gradient Method
Pipelined Krylov subspace methods (also referred to as communication-hiding
methods) have been proposed in the literature as a scalable alternative to
classic Krylov subspace algorithms for iteratively computing the solution to a
large linear system in parallel. For symmetric and positive definite system
matrices the pipelined Conjugate Gradient method outperforms its classic
Conjugate Gradient counterpart on large scale distributed memory hardware by
overlapping global communication with essential computations like the
matrix-vector product, thus hiding global communication. A well-known drawback
of the pipelining technique is the (possibly significant) loss of numerical
stability. In this work a numerically stable variant of the pipelined Conjugate
Gradient algorithm is presented that avoids the propagation of local rounding
errors in the finite precision recurrence relations that construct the Krylov
subspace basis. The multi-term recurrence relation for the basis vector is
replaced by two-term recurrences, improving stability without increasing the
overall computational cost of the algorithm. The proposed modification ensures
that the pipelined Conjugate Gradient method is able to attain a highly
accurate solution independently of the pipeline length. Numerical experiments
demonstrate a combination of excellent parallel performance and improved
maximal attainable accuracy for the new pipelined Conjugate Gradient algorithm.
This work thus resolves one of the major practical restrictions for the
usability of pipelined Krylov subspace methods.
Comment: 15 pages, 5 figures, 1 table, 2 algorithm
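For reference, the classic Conjugate Gradient recurrences read as follows (a standard textbook sketch, not the paper's pipelined variant). Each iteration needs two global reductions (the dot products) and one matrix-vector product; the pipelined variants reorganize these recurrences so the reductions can overlap with the matrix-vector product on distributed hardware, at the cost of the stability issues the paper addresses.

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, maxiter=1000):
    """Classic CG for a symmetric positive definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rr = r @ r                       # global reduction 1
    for _ in range(maxiter):
        Ap = A @ p                   # matrix-vector product
        alpha = rr / (p @ Ap)        # global reduction 2
        x += alpha * p
        r -= alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p
        rr = rr_new
    return x

# Small made-up SPD test system.
rng = np.random.default_rng(2)
n = 40
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # symmetric positive definite
b = rng.standard_normal(n)
x = conjugate_gradient(A, b)
```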